support millisecond timestamps in iceberg compatibility mode #6352
Merged
Conversation
Signed-off-by: Max Falk <gfalk@yelp.com>

Contributor
Please fix

Contributor (Author)
Done, I've also updated the docs

Contributor (Author)
@JingsongLi good to merge?
0dunay0 approved these changes on Oct 1, 2025
JingsongLi reviewed on Oct 13, 2025
```diff
-                "Paimon Iceberg compatibility only support timestamp type with precision from 4 to 6.");
+                timestampPrecision >= 3 && timestampPrecision <= 6,
+                "Paimon Iceberg compatibility only support timestamp type with precision from 3 to 6.");
                 return Timestamp.fromMicros(timestampLong);
```
Contributor
If precision is 3, this long should be a millis.
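The reviewer's point can be sketched as follows. This is an illustrative standalone example, not Paimon's actual code: the `toMicros` helper and the precision cutoff are assumptions made for the sketch, showing why a raw long read from a precision-3 (millisecond) column must be scaled before it can be treated as a canonical microsecond timestamp.

```java
// Hypothetical sketch: scale a stored long to microseconds based on the
// declared timestamp precision. Precisions up to 3 store milliseconds;
// precisions 4-6 are assumed to already store microseconds.
public class TimestampScaling {
    static long toMicros(long value, int precision) {
        if (precision <= 3) {
            return value * 1_000L; // millis -> micros
        }
        return value; // already micros
    }

    public static void main(String[] args) {
        // 2023-11-14T22:13:20.123Z as epoch milliseconds
        long millis = 1_700_000_000_123L;
        System.out.println(toMicros(millis, 3)); // prints 1700000000123000
        System.out.println(toMicros(millis * 1_000L, 6)); // unchanged
    }
}
```

Without the precision-3 branch, a millisecond value would be interpreted as microseconds and come out a factor of 1000 too early.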
* master: (162 commits)
  [Python] Rename to BATCH_COMMIT_IDENTIFIER in snapshot.py
  [Python] Suppport multi prepare commit in the same TableWrite (apache#6526)
  [spark] Fix drop temporary view (apache#6529)
  [core] skip validate main branch before orphan files cleaning (apache#6524)
  [core][spark] Introduce upper transform (apache#6521)
  [Python] Keep the variable names of Identifier consistent with Java (apache#6520)
  [core] Remove hash lookup to simplify interface (apache#6519)
  [core][format] Format Table plan partitions should ignore hidden & illegal dirs (apache#6522)
  [hotfix] Print partition spec and type when error in InternalRowPartitionComputer
  [hotfix] Add more informat to check partition spec in InternalRowPartitionComputer
  [hotfix] Use deleteDirectoryQuietly in TempFileCommitter.clean
  [core] format table: support write file in _temporary at first (apache#6510)
  [core] Support non null column with write type (apache#6513)
  [core][fix] Blob with rolling file failed (apache#6518)
  [core][rest] Support schema validation and infer for external paimon table (apache#6501)
  [hotfix] Correct visitors for TransformPredicate
  [hotfix] Rename to copy from withNewInputs in TransformPredicate
  [core][spark] Support push down transform predicate (apache#6506)
  [spark] Implement SupportsReportStatistics for PaimonFormatTableBaseScan (apache#6515)
  [docs] add docs for auto-clustering of historical partitions (apache#6516)
  ...
Contributor
+1
jerry-024 added a commit to jerry-024/paimon that referenced this pull request on Dec 29, 2025
* upstream/master: (51 commits)
  [test] Fix unstable test: handle MiniCluster shutdown gracefully in collect method (apache#6913)
  [python] fix ray dataset not lazy loading issue when parallelism = 1 (apache#6916)
  [core] Refactor ExternalPathProviders abstraction
  [spark] fix Merge Into unstable tests (apache#6912)
  [core] Enable Entropy Inject for data file path to prevent being throttled by object storage (apache#6832)
  [iceberg] support millisecond timestamps in iceberg compatibility mode (apache#6352)
  [spark] Handle NPE for pushdown aggregate when a datasplit has a null max/min value (apache#6611)
  [test] Fix unstable case testLimitPushDown
  [core] Refactor row id pushdown to DataEvolutionFileStoreScan
  [spark] paimon-spark supports row id push down (apache#6697)
  [spark] Support compact_database procedure (apache#6328) (apache#6910)
  [lucene] Fix row count in IndexManifestEntry
  [test] Remove unstable test: AppendTableITCase.testFlinkMemoryPool
  [core] Refactor Global index writer and reader for Btree
  [core] Minor refactor to magic number into footer
  [core] Support btree global index in paimon-common (apache#6869)
  [spark] Optimize compact for data-evolution table, commit multiple times to avoid out of memory (apache#6907)
  [rest] Add fromSnapshot to rollback (apache#6905)
  [test] Fix unstable RowTrackingTestBase test
  [core] Simplify FileStoreCommitImpl to extract some classes (apache#6904)
  ...
Purpose
This builds on #4318 by allowing Paimon's millisecond timestamp type to be converted to the canonical microsecond representation in Iceberg.
We ingest data from Kafka and other sources into Paimon and our Iceberg-based data lake. Kafka timestamps and some business data have millisecond precision, so this change supports that data without requiring any type changes in the ingestion pipeline.
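As a hedged illustration of the conversion this PR enables (the class and method names here are hypothetical, not from the PR), an epoch-millisecond value such as a Kafka record timestamp scales to Iceberg's microsecond representation like this:

```java
import java.time.Instant;

public class MillisToIcebergMicros {
    // Scale an epoch-millisecond timestamp (e.g. a Kafka record timestamp)
    // to the epoch-microsecond value Iceberg uses for timestamp columns.
    static long epochMillisToMicros(long epochMillis) {
        Instant ts = Instant.ofEpochMilli(epochMillis);
        // multiplyExact guards against silent overflow on extreme inputs
        return Math.multiplyExact(ts.getEpochSecond(), 1_000_000L) + ts.getNano() / 1_000L;
    }

    public static void main(String[] args) {
        long kafkaMillis = 1_700_000_000_123L; // hypothetical record timestamp
        System.out.println(epochMillisToMicros(kafkaMillis)); // prints 1700000000123000
    }
}
```

The scaling is lossless in this direction (every millisecond value has an exact microsecond equivalent), which is why no type change is needed in the ingestion pipeline.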
Tests
IcebergConversionsTimestampTest.java